With the development of science and technology, the Internet of Things (IoT) has gradually entered people's lives, bringing great convenience and improving work efficiency. In particular, the IoT can replace humans in jobs that humans cannot perform. As a new type of IoT vehicle, the Unmanned Aerial Vehicle (UAV) has seen encouraging research progress and has very promising development prospects. However, privacy and communication remain serious issues in drone applications. This is because most drones still use centralized cloud-based data processing, which may lead to leakage of the data they collect; at the same time, the large amount of data collected by drones may incur considerable communication overhead when transferred to the cloud. Federated learning, as a means of privacy protection, can effectively solve both problems. However, when applied to UAV networks, federated learning also needs to account for the heterogeneity of data caused by regional differences in UAV regulation. In response, this paper proposes a new algorithm, FedBA, to optimize the global model and solve the data heterogeneity problem. In addition, we apply the algorithm to several real datasets, and the experimental results show that it outperforms other algorithms and improves the accuracy of the UAVs' local models.
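A minimal sketch of the server-side aggregation step that federated UAV schemes such as FedBA build on: the server combines client model updates without ever seeing the clients' raw data. The sample-size weighting below is the standard FedAvg rule; the abstract does not specify FedBA's heterogeneity correction, so this shows only the generic baseline it refines.

```python
# Generic federated-averaging step (FedAvg baseline, not FedBA itself):
# the server only sees parameter vectors and local sample counts, never
# the privacy-sensitive data collected by each UAV.

def aggregate(client_weights, client_sizes):
    """Sample-size-weighted average of per-client parameter vectors."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w

# Three UAV clients with different amounts of local data.
clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 10, 20]
print(aggregate(clients, sizes))  # prints [3.5, 4.5]
```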
Semantic localization (SeLo) refers to the task of obtaining the most relevant locations in large-scale remote sensing (RS) images using semantic information such as text. As an emerging task based on cross-modal retrieval, SeLo achieves semantic-level retrieval using only caption-level annotations, which demonstrates its great potential for unifying downstream tasks. Although preliminary SeLo work has appeared, this emerging direction has not yet been systematically explored and analyzed. In this paper, we thoroughly study this field and provide a complete benchmark in terms of metrics and test data to advance the SeLo task. First, based on the characteristics of this task, we propose multiple discriminative evaluation metrics to quantify the performance of the SeLo task. The designed significant area proportion, attention shift distance, and discrete attention distance can be used to evaluate the generated SeLo maps at the pixel level and region level. Next, to provide standard evaluation data for the SeLo task, we contribute a diverse, multi-semantic, multi-objective Semantic Localization Testset (AIR-SLT). AIR-SLT consists of 22 large-scale RS images and 59 test cases with different semantics, and aims to provide a comprehensive evaluation of retrieval models. Finally, we analyze the SeLo performance of RS cross-modal retrieval models in detail, explore the impact of different variables on this task, and provide a complete benchmark for the SeLo task. We also establish a new paradigm for RS referring expression comprehension, and demonstrate the great advantage of SeLo in semantics by combining it with tasks such as detection and road extraction. The proposed evaluation metrics, semantic localization testset, and corresponding scripts are available at github.com/xiaoyuan1996/semanticlocalizationmetrics.
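As an illustration of the pixel-level idea behind one of the proposed metrics, the sketch below computes a toy version of the significant area proportion: the fraction of a SeLo map's attention mass that falls inside the annotated target region. The function name and the binary region mask are assumptions for illustration; the official scripts in the linked repository define the actual metrics.

```python
def salient_area_proportion(attention, region_mask):
    """Fraction of total attention mass inside the annotated target region.

    attention   -- 2D grid of non-negative attention values (the SeLo map)
    region_mask -- 2D grid of 0/1 flags marking the ground-truth region
    A higher value means the SeLo map focuses on the right area.
    """
    total = 0.0
    inside = 0.0
    for att_row, mask_row in zip(attention, region_mask):
        for a, m in zip(att_row, mask_row):
            total += a
            if m:
                inside += a
    return inside / total if total > 0 else 0.0

# 80% of the attention mass lands inside the bottom-row target region.
att = [[1.0, 1.0], [4.0, 4.0]]
mask = [[0, 0], [1, 1]]
print(salient_area_proportion(att, mask))  # prints 0.8
```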
Many data analysis tasks rely heavily on a deep understanding of tables (multi-dimensional data). Across these tasks, there exist commonly used metadata attributes of table fields/columns. In this paper, we identify four such analysis metadata: measure/dimension dichotomy, common field roles, semantic field type, and default aggregation function. While inferring these metadata faces the challenge of insufficient supervision signals, it can leverage existing knowledge and an understanding of data distributions. To infer these metadata for a raw table, we propose a multi-task Metadata model that fuses field distribution and knowledge graph information into pre-trained tabular models. For model training and evaluation, we collect a large corpus (~582K tables from private spreadsheets and public tabular datasets) of analysis metadata by using a variety of smart supervisions from downstream tasks. Our best model achieves accuracy = 98%, hit rate at top-1 > 67%, accuracy > 80%, and accuracy = 88% on the four analysis metadata inference tasks, respectively. It outperforms a series of baselines based on rules, traditional machine learning methods, and pre-trained tabular models. The analysis metadata model is deployed in a popular data analysis product, helping downstream intelligent features such as insights mining, chart / pivot table recommendation, and natural language QA.
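To make the measure/dimension dichotomy concrete, here is a hypothetical rule-based baseline of the kind the learned model is compared against (and outperforms): a column is guessed to be a measure if it is fully numeric with many distinct values. This heuristic and its threshold are assumptions for illustration, not the paper's model.

```python
def guess_measure_or_dimension(values):
    """Rule-of-thumb baseline: numeric columns with many distinct values
    are treated as measures (quantities to aggregate); everything else is
    treated as a dimension (a categorical axis to group by)."""
    numeric = [v for v in values if isinstance(v, (int, float))]
    if len(numeric) < len(values):        # mixed or textual column
        return "dimension"
    distinct_ratio = len(set(values)) / len(values)
    return "measure" if distinct_ratio > 0.5 else "dimension"

print(guess_measure_or_dimension(["US", "UK", "US"]))   # prints dimension
print(guess_measure_or_dimension([1.2, 3.4, 5.6]))      # prints measure
```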
Latent factor (LF) models can effectively represent high-dimensional and sparse (HiDS) data via low-rank matrix approximation. Hessian-free (HF) optimization is an efficient method for exploiting second-order information of an LF model's objective function, and it has been used to optimize second-order LF (SLF) models. However, an SLF model's low-rank representation ability depends heavily on its multiple hyperparameters. Determining these hyperparameters is time-consuming, which greatly reduces the practicality of SLF models. To address this issue, a practical SLF (PSLF) model is proposed in this work. It achieves hyperparameter self-adaptation with a distributed particle swarm optimizer (DPSO), which is gradient-free and parallelized. Experiments on real HiDS datasets show that the PSLF model has a competitive advantage over state-of-the-art models in data representation ability.
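A minimal, single-process sketch of the particle swarm idea behind DPSO: a gradient-free search over a one-dimensional hyperparameter range, with a toy quadratic standing in for the validation error of an SLF model. All names and constants here are illustrative; the actual DPSO is distributed and parallelized.

```python
import random

def pso_minimize(objective, bounds, n_particles=8, iters=30, seed=0):
    """Minimal particle swarm optimizer over a 1-D hyperparameter range.

    Each particle is pulled toward its personal best and the swarm's global
    best; no gradients of `objective` are ever needed.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                              # personal best positions
    pbest_val = [objective(x) for x in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

# Toy stand-in for validation error as a function of one hyperparameter.
best_lam, best_err = pso_minimize(lambda lam: (lam - 0.3) ** 2, (0.0, 1.0))
```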
Blood pressure (BP) monitoring is essential for daily healthcare, especially for cardiovascular diseases. However, BP values are mainly acquired through contact sensing methods, which are inconvenient and unfriendly for BP measurement. Therefore, we propose an efficient end-to-end network for estimating BP values from facial videos to achieve remote BP measurement in daily life. In this study, we first derive a spatio-temporal map from a short-term (~15s) facial video. Based on the spatio-temporal map, we then regress the BP range with a designed blood pressure classifier and simultaneously calculate the specific value with a blood pressure calculator for each BP range. In addition, we develop an innovative oversampling training strategy to handle the unbalanced data distribution problem. Finally, we train the proposed network on the private dataset ASPD and test it on the popular dataset MMSE-HR. As a result, the proposed network achieves state-of-the-art MAEs of 12.35 mmHg and 9.5 mmHg for systolic and diastolic blood pressure measurement, which is better than recent works. We conclude that the proposed method has great potential for camera-based BP monitoring in real-world scenarios.
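The oversampling strategy is not detailed in the abstract; the sketch below shows a generic class-balancing oversampler of the kind such a strategy builds on, duplicating samples from under-represented BP ranges until every range matches the largest one. All names here are illustrative assumptions.

```python
import random
from collections import defaultdict

def oversample(samples, labels, seed=0):
    """Duplicate minority-class samples until all classes are the same size.

    For BP estimation, `labels` would be the BP-range class of each training
    clip; rare ranges get resampled so the classifier sees balanced batches.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for s, y in zip(samples, labels):
        by_class[y].append(s)
    target = max(len(group) for group in by_class.values())
    out_samples, out_labels = [], []
    for y, group in by_class.items():
        picks = group + [rng.choice(group) for _ in range(target - len(group))]
        out_samples.extend(picks)
        out_labels.extend([y] * target)
    return out_samples, out_labels

# Range 0 has 3 clips, range 1 only 2; after oversampling both have 3.
s, y = oversample([10, 11, 12, 20, 21], [0, 0, 0, 1, 1])
```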
Learning-based network intrusion detection systems (NIDS) are widely deployed to defend against various network attacks. Existing learning-based NIDS mainly use neural networks (NN) as classifiers, which depend on the quality and quantity of network traffic data. Such NN-based approaches are also difficult to interpret for improving efficiency and scalability. In this paper, we design a new learning-based NIDS with a local-global computation paradigm, FedForest, by combining the interpretable gradient boosting decision tree (GBDT) and the federated learning (FL) framework. Specifically, FedForest consists of multiple clients that extract local network traffic data features for the server to train a model and detect intrusions. A privacy-enhancing technique is also proposed in FedForest to further protect the privacy of the FL system. Extensive experiments on 4 cyber-security datasets with different tasks show that FedForest is effective, efficient, interpretable, and extendable. FedForest ranked first in the collaborative learning and cybersecurity competition for Chinese college students.
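A toy sketch of the local-global split described above, with the GBDT replaced by a single decision stump so the example stays self-contained: each client reduces its raw traffic to compact features locally (raw packets never leave the client), and the server trains one interpretable model on the pooled features. The feature choices and names are assumptions for illustration, not FedForest's actual pipeline.

```python
def extract_features(flows):
    """Client side: reduce each raw flow (list of packet sizes) to a compact
    feature tuple (mean packet size, flow length). Only these features are
    shared with the server, never the raw traffic."""
    return [(sum(f) / len(f), len(f)) for f in flows]

def train_stump(features, labels):
    """Server side: pick the mean-packet-size threshold with fewest errors.
    A one-node stand-in for an interpretable GBDT ensemble: the learned rule
    'mean size > t means attack' can be read off directly."""
    best = None
    for (t, _), _y in zip(features, labels):
        errs = sum((x[0] > t) != y for x, y in zip(features, labels))
        if best is None or errs < best[1]:
            best = (t, errs)
    return best[0]

# Two clients share features (not packets); the server fits the global rule.
benign = extract_features([[10, 10], [20, 20]])   # client 1, label 0
attack = extract_features([[100, 100]])           # client 2, label 1
threshold = train_stump(benign + attack, [0, 0, 1])
```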
Active Directory is the default security management system for Windows domain networks. We study the shortest path edge interdiction problem for defending Active Directory style attack graphs. The problem is a Stackelberg game between one defender and one attacker. The attack graph contains one destination node and multiple entry nodes. The attacker's entry node is chosen by nature. The defender chooses to block a set of edges limited by a budget. The attacker then picks the shortest unblocked attack path. The defender aims to maximize the attacker's expected shortest path length, where the expectation is taken over entry nodes. We observe that practical Active Directory attack graphs have small maximum attack path lengths and are structurally close to trees. We first show that even when the maximum attack path length is a constant, the problem is still W[1]-hard with respect to the defender's budget. Having a small maximum attack path length and a small budget is not enough for designing fixed-parameter algorithms. If we further assume that the number of entry nodes is small, then we derive a fixed-parameter tractable algorithm. We then propose two other fixed-parameter algorithms by exploiting the tree-like features. One is based on tree decomposition and requires a small tree width. The other assumes a small number of splitting nodes (nodes with multiple out-going edges). Finally, the latter algorithm is converted into a graph convolutional neural network based heuristic that scales to larger graphs with more splitting nodes.
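The defender-attacker interaction described above can be sketched directly: the attacker's best response is a shortest-path computation over the unblocked edges, and a brute-force defender enumerates budget-limited edge sets to maximize the attacker's expected shortest path over entry nodes. This exhaustive search is exponential in the budget; the paper's fixed-parameter algorithms exist precisely to avoid it.

```python
import heapq
import itertools

def shortest_unblocked(adj, src, dst, blocked):
    """Attacker's best response: Dijkstra that ignores blocked edges."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            if (u, v) in blocked:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")          # no unblocked path remains

def best_blocking(adj, entries, dst, budget):
    """Defender: brute-force the budget-limited edge set that maximizes the
    attacker's expected shortest path over uniformly chosen entry nodes."""
    edges = [(u, v) for u, nbrs in adj.items() for v, _ in nbrs]
    best_set, best_val = frozenset(), -1.0
    for k in range(budget + 1):
        for combo in itertools.combinations(edges, k):
            blocked = set(combo)
            val = sum(shortest_unblocked(adj, e, dst, blocked)
                      for e in entries) / len(entries)
            if val > best_val:
                best_val, best_set = val, frozenset(blocked)
    return best_set, best_val

# One entry node 'a', target 't': blocking the direct edge (a, t) forces the
# attacker onto the longer path a -> b -> t.
adj = {"a": [("t", 1), ("b", 1)], "b": [("t", 1)]}
blocked, val = best_blocking(adj, ["a"], "t", budget=1)
```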
Neural Architecture Search (NAS) has been widely adopted to design accurate and efficient image classification models. However, applying NAS to a new computer vision task still requires a significant amount of effort. This is because 1) previous NAS research has over-prioritized image classification while largely ignoring other tasks; 2) many NAS works focus on optimizing task-specific components that do not transfer favorably to other tasks; and 3) existing NAS methods are typically designed to be "proxyless," requiring significant effort to integrate with each new task's training pipeline. To address these challenges, we propose FBNetV5, a NAS framework that can search for neural architectures for a variety of vision tasks with reduced computational cost and human effort. Specifically, we design 1) a search space that is simple yet inclusive and transferable; 2) a multitask search process that is disentangled from the target tasks' training pipelines; and 3) an algorithm to simultaneously search for architectures for multiple tasks with a computational cost agnostic to the number of tasks. We evaluate the proposed FBNetV5 on three fundamental vision tasks -- image classification, object detection, and semantic segmentation. Models searched by FBNetV5 in a single run of search outperform the previous state of the art in all three tasks: image classification (e.g., +1.3% ImageNet top-1 accuracy under the same FLOPs compared to FBNetV3), semantic segmentation (e.g., +1.8% higher ADE20K val. mIoU than SegFormer with 3.6x fewer FLOPs), and object detection (e.g., +1.1% COCO val. mAP with 1.2x fewer FLOPs compared to YOLOX).
Open peer review is a growing trend in academic publications. Public access to peer review data can benefit both the academic and publishing communities. It also provides strong support for studies on review comment generation and, further, for the realization of automated scholarly paper review. However, most of the existing peer review datasets do not provide data that cover the whole peer review process. Apart from this, their data are not diversified enough, as they are mainly collected from the field of computer science. These two drawbacks of the currently available peer review datasets need to be addressed to unlock more opportunities for related studies. In response to this problem, we construct MOPRD, a multidisciplinary open peer review dataset. This dataset consists of paper metadata, multiple-version manuscripts, review comments, meta-reviews, authors' rebuttal letters, and editorial decisions. Moreover, we design a modular guided review comment generation method based on MOPRD. Experiments show that our method delivers better performance, as indicated by both automatic metrics and human evaluation. We also explore other potential applications of MOPRD, including meta-review generation, editorial decision prediction, author rebuttal generation, and scientometric analysis. MOPRD provides a strong foundation for further studies in peer review-related research and other applications.
Transformers are widely used in NLP tasks. However, current approaches to leveraging transformers to understand language expose one weak spot: number understanding. In some scenarios, numbers occur frequently, especially in semi-structured data like tables. But current approaches to number-rich tasks with transformer-based language models abandon or lose some of the numeracy information -- e.g., by breaking numbers into sub-word tokens -- which leads to many number-related errors. In this paper, we propose the LUNA framework, which improves the numerical reasoning and calculation capabilities of transformer-based language models. With the number plugins NumTok and NumBed, LUNA represents each number as a whole in the model input. With number pre-training, including a regression loss and model distillation, LUNA bridges the gap between number and vocabulary embeddings. To the best of our knowledge, this is the first work that explicitly injects numeracy capability into language models using number plugins. Besides evaluating toy models on toy tasks, we evaluate LUNA on three large-scale transformer models (RoBERTa, BERT, TabBERT) over three different downstream tasks (TAT-QA, TabFact, CrediTrans), and observe that the performance of the language models is consistently improved by LUNA. The augmented models also improve the official baseline of TAT-QA (EM: 50.15 -> 59.58) and achieve SOTA performance on CrediTrans (F1 = 86.17).
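A toy illustration of the "number as a whole" idea behind the NumTok plugin: instead of letting a subword tokenizer shatter 12.5 into pieces, numbers are lifted out as single placeholder tokens whose values are kept on the side (for a component like NumBed to embed numerically). The regex and the [NUM] placeholder are assumptions for illustration, not LUNA's actual implementation.

```python
import re

# Matches plain integers and decimals; real numeric text (signs, commas,
# scientific notation) needs a richer pattern than this illustrative one.
NUM_RE = re.compile(r"\d+(?:\.\d+)?")

def lift_numbers(text):
    """Replace each number with a single [NUM] token and return the masked
    text plus the extracted values, so no number is split into sub-words."""
    values = [float(m) for m in NUM_RE.findall(text)]
    return NUM_RE.sub("[NUM]", text), values

masked, vals = lift_numbers("revenue grew 12.5 percent to 3400")
# masked: "revenue grew [NUM] percent to [NUM]", vals: [12.5, 3400.0]
```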